Australia Turns the Spotlight on AI Chatbots: Demanding Child-Protection Details
Australia’s digital safety regulator has pushed major chatbot makers to open up about how they protect their most vulnerable users: children. With AI-driven chat services proliferating, the country is applying stricter scrutiny to how these platforms handle risks of self-harm, sexual exploitation and other unsafe content.
At the heart of the move: the eSafety Commissioner sent formal directions to four companies—Character Technologies (the maker of the celebrity-simulation chatbot service Character.AI), Glimpse.AI, Chai Research and Chub AI—requiring them to set out their child-protection systems and practices. ([Reuters][1])
Here’s what’s going on — and why it matters.
What’s happening
- The eSafety Commissioner is demanding details on how the companies prevent exposure of minors to child sexual exploitation, pornography, or material that promotes self-harm or eating disorders. ([Reuters][1])
- The notice comes as schools in Australia report children as young as 13 spending up to five hours a day interacting with chatbots—sometimes in sexually explicit ways, or forming emotional dependencies. ([Reuters][1])
- The regulatory power behind this: The eSafety Commissioner can compel firms to report internal safety measures. If companies don’t comply, they may face daily fines of up to A$825,000 (about US$536,000). ([Reuters][1])
- Notably, the regulator did not send a notice to ChatGPT-maker OpenAI. The crackdown currently focuses on “companion-based” chatbot services (those designed for more social and interactive uses), while ChatGPT is not yet covered by the relevant industry code in Australia (it will be from March 2026). ([Reuters][1])
- The move comes as Australia gears up for even tougher online safety regulations: from December, social media companies will be forced to deactivate or refuse accounts for users under 16—or face fines up to A$49.5 million—aimed at protecting young people’s mental and physical health. ([Reuters][1])
Why it matters
- Chatbots are evolving fast: Unlike simple scripted bots, modern services can hold realistic, adaptive conversations. That brings promise—and risk. As the Reuters article notes, “the realistic conversational abilities of such services have taken the world by storm” yet “have also fanned concern that a lack of guardrails exposes vulnerable individuals to dangerous content.” ([Reuters][1])
- Children are particularly exposed: With extended use, chatbots become more than a novelty; they can become emotional anchors or even gateways to unsafe ideas. The Australian regulator cites worrying scenarios: minors forming sexual or emotionally dependent relationships with chatbots, or being spurred to self-harm. ([Reuters][1])
- Regulation is catching up: The move marks a regulatory pivot from traditional internet platforms (social media, websites) to emerging conversational AI tools. Australia’s approach here may set a precedent for other jurisdictions grappling with similar risks.
- Business implications for AI firms: The demand for transparency and documented safety measures signals that AI companies should expect stricter oversight globally. Not just the technology but also the governance, auditing, and user-safeguarding procedures will come under the microscope.
Key takeaways
- The eSafety Commissioner is flexing regulatory muscle: firms must now detail how they tackle child-safety risks.
- The focus is on chatbots that act like companions, not all AI tools; hence Character Technologies and its rivals are targeted, while OpenAI/ChatGPT is not yet covered.
- School reports of heavy usage among teens highlight the urgency: this isn’t a hypothetical risk; it’s ongoing behaviour with real consequences.
- The broader regulatory context: Australia is moving toward a more aggressive stance on youth online safety (e.g., age-based account restrictions on social platforms).
- For AI developers and platform operators: expect more obligations, public-reporting requirements, and potentially stiff penalties if child-safeguarding is weak.
Glossary
- Chatbot: A software application designed to conduct a conversation via text or voice interface, often powered by artificial-intelligence (AI) techniques.
- Companion-based chatbot: Here, a chatbot designed to mimic a conversational partner—often with emotional/social interaction, rather than purely transactional tasks.
- Industry code: A set of voluntary (or regulatory) guidelines that an industry agrees to follow; in Australia’s case, future regulation will cover certain AI systems under an industry code for online services.
- Guardrails: Safeguards, controls or policies implemented to prevent misuse or harmful outcomes of a technology.
- Self-harm material: Content that encourages, instructs or depicts self-harm behaviours (such as suicide or disordered eating) in a way that could influence vulnerable individuals.
The takeaway
Australia’s regulator has fired a clear signal: as AI chatbots evolve and become more embedded in young people’s lives, the era of unregulated conversation machines is ending. Companies behind these services must now make visible how they are protecting children from sexual content, self-harm material and other harmful content, or face meaningful consequences.
This matters globally because the AI conversation-tool space is rapidly expanding and regulators elsewhere are watching. For parents, educators and stakeholders it’s a reminder: as bots become conversation partners, not just query tools, the stakes—especially for minors—are real.
Source: Reuters article “Australia tells AI chatbot companies to detail child-protection steps”. ([Reuters][1])
[1]: https://www.reuters.com/world/asia-pacific/australia-tells-ai-chatbot-companies-detail-child-protection-steps-2025-10-22/ “Australia tells AI chatbot companies to detail child protection steps | Reuters”